Long-term non-prehensile planar manipulation is a challenging task for robot planning and feedback control. It is characterized by underactuation, hybrid control, and contact uncertainty. One main difficulty is determining contact points and directions, which involves joint logical and geometrical reasoning over the modes of the dynamics model. To tackle this issue, we propose a demonstration-guided hierarchical optimization framework for offline task and motion planning (TAMP). Our work extends the formulation of the dynamics model of the pusher-slider system to include a separation mode with face-switching cases, and solves a warm-started TAMP problem by exploiting human demonstrations. We show that our approach copes well with the local-minima problems present in state-of-the-art solvers and determines a valid solution to the task. We validate our results in simulation and demonstrate the approach's applicability on a pusher-slider system with a real Franka Emika robot in the presence of external disturbances.
Many problems in robotics are fundamentally problems of geometry, which has led to increased research effort in geometric methods for robotics in recent years. The results have been algorithms using the various frameworks of screw theory, Lie algebra, and dual quaternions. A unification and generalization of these popular formalisms can be found in geometric algebra. The aim of this paper is to showcase the capabilities of geometric algebra when applied to robot manipulation tasks. In particular, the modelling of cost functions for optimal control can be done uniformly across different geometric primitives, leading to a low symbolic complexity of the resulting expressions and to geometric intuitiveness. We demonstrate the usefulness, simplicity, and computational efficiency of geometric algebra in several experiments using a Franka Emika robot. The presented algorithms were implemented in C++20 and resulted in the publicly available library \textit{gafro}. The benchmark shows faster computation of the kinematics than state-of-the-art robotics libraries.
Optimal control has become increasingly popular in robotics in recent years and has been applied to many applications involving complex dynamical systems. Closed-loop optimal control strategies include model predictive control (MPC) and time-varying linear controllers optimized through iLQR. However, such feedback controllers rely on information of the current state only, limiting the range of robotic applications where the robot needs to remember what it has done and act and plan accordingly. The recently proposed System Level Synthesis (SLS) framework circumvents this limitation via a richer controller structure with memory. In this work, we propose to optimally design reactive anticipatory robot skills with memory by extending SLS to tracking problems involving nonlinear systems and non-quadratic cost functions. We showcase our method with two scenarios exploiting task precisions and object handovers for a pick-and-place task with a 7-axis Franka Emika robot, in both simulated and real environments.
The convergence of many numerical optimization techniques is highly sensitive to the initial guess given to the solver. We propose an approach based on tensor methods to initialize existing optimization solvers near global optima. The approach uses only the definition of the cost function and does not require access to any database of good solutions. We first transform the cost function, which is a function of the task parameters and the optimization variables, into a probability density function. Unlike existing approaches that set the task parameters as constants, we consider them as another set of random variables and approximate the joint probability distribution of the task parameters and the optimization variables using a surrogate probability model. For a given task, we then generate samples from the conditional distribution with respect to the given task parameters and use them as initializations for the optimization solver. Since conditioning and sampling from an arbitrary density function are challenging, we use Tensor Train decomposition to obtain a surrogate probability model from which we can efficiently derive the conditional model and draw samples. The method can produce multiple solutions, coming from different modes, for a given task. We first evaluate the approach by applying it to various challenging benchmark functions for numerical optimization that are difficult to solve using gradient-based optimization solvers with a naive initialization, showing that the proposed method can generate samples close to the global optima and coming from multiple modes. We then demonstrate the generality of the framework and its relevance to robotics by applying the proposed method to a 7-DoF manipulator.
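The abstract above does not give an implementation; the following is a minimal sketch of the core idea only. It turns a cost c(a, x) into an unnormalized density p(a, x) ∝ exp(-beta·c), conditions on the task parameter a, and samples initializations for a local solver. For clarity the surrogate here is a dense discretized grid rather than a Tensor Train decomposition, and the cost function, beta, and grid sizes are illustrative assumptions:

```python
import numpy as np

def cost(a, x):
    # Toy multimodal cost: good minima near x = ±sqrt(a).
    return (x**2 - a)**2

a_grid = np.linspace(0.5, 2.0, 50)    # task parameters
x_grid = np.linspace(-2.0, 2.0, 200)  # optimization variable
A, X = np.meshgrid(a_grid, x_grid, indexing="ij")
beta = 5.0
P = np.exp(-beta * cost(A, X))        # unnormalized joint density on the grid

def sample_init(a_query, n_samples, rng):
    # Condition on the closest grid value of the task parameter,
    # then sample x from the normalized conditional slice.
    i = np.argmin(np.abs(a_grid - a_query))
    p = P[i] / P[i].sum()
    return rng.choice(x_grid, size=n_samples, p=p)

def refine(a, x0, lr=0.02, iters=300):
    # Plain gradient descent as a stand-in for any local solver.
    x = x0
    for _ in range(iters):
        x -= lr * 2 * (x**2 - a) * 2 * x
    return x

rng = np.random.default_rng(0)
inits = sample_init(1.0, 20, rng)
sols = np.array([refine(1.0, x0) for x0 in inits])
# Samples typically land near both modes x = -1 and x = +1.
```

Because the samples are drawn from high-density (low-cost) regions of the conditional, the local solver recovers solutions from several modes instead of collapsing into whichever basin a naive initialization happened to start in.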
Daily manipulation tasks are characterized by geometric primitives related to actions and object shapes. Such geometric descriptors are poorly represented by using only a Cartesian coordinate system. In this article, we propose a learning approach to extract the optimal representation from a dictionary of coordinate systems to encode an observed movement/behavior. This is achieved by using an extension of Gaussian distributions on Riemannian manifolds, which is used to analyze a set of user demonstrations by considering multiple geometries as candidate representations of the task. We formulate the reproduction problem as a general optimal control problem based on an iterative linear quadratic regulator (iLQR), where the Gaussian distributions in the extracted coordinate systems are used to define the cost function. We apply our approach to object grasping and box-opening tasks in simulation and on a 7-axis Franka Emika robot. The results show that the robot can exploit several geometries to execute the manipulation tasks and generalize them to new situations, by maintaining the invariant characteristics of the task in the coordinate systems of interest.
Robot programming methods for industrial robots are time-consuming and often require operators to have knowledge in both robotics and programming. To reduce the costs associated with reprogramming, various interfaces using augmented reality have recently been proposed, providing users with more intuitive means of controlling robots in real time and programming them without having to code. However, most solutions require the operator to be close to the workspace of the real robot, which implies either removing it from the production line or shutting down the whole production line due to safety hazards. We propose a novel augmented reality interface providing users with the ability to model a virtual representation of a workspace, which can be saved and reused to program new tasks or adapt old ones, without having to be co-located with the real robot. Similar to previous interfaces, the operator can then program robot tasks or control the robot in real time by manipulating a virtual robot. We evaluate the intuitiveness and usability of the proposed interface with a user study, in which 18 participants programmed a robot manipulator for a disassembly task.
Non-linear state-space models, also known as general hidden Markov models, are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences in general. The particle-based, rapid incremental smoother PaRIS is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models. Such expectations appear naturally in several learning contexts, such as maximum-likelihood estimation (MLE) and Markov score climbing (MSC). PaRIS has linear computational complexity, limited memory requirements, and comes with non-asymptotic bounds, convergence results, and stability guarantees. Still, being based on self-normalised importance sampling, the PaRIS estimator is biased. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. Our second contribution is to apply PPG in a learning framework, covering MLE and MSC as special cases. In this context, we establish, under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao--Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
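For readers unfamiliar with SMC, the following is a generic bootstrap particle filter on a linear-Gaussian state-space model, illustrating the self-normalised importance sampling that PaRIS builds on; it is not the PaRIS or PPG algorithm itself, and the model and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1).
phi, T, N = 0.9, 200, 500

# Simulate a latent trajectory and its observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

particles = rng.normal(size=N)
filter_means = np.zeros(T)
for t in range(T):
    if t > 0:
        # Propagate through the transition kernel (bootstrap proposal).
        particles = phi * particles + rng.normal(size=N)
    # Importance weights from the observation density, self-normalised.
    logw = -0.5 * (y[t] - particles) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_means[t] = np.sum(w * particles)
    # Multinomial resampling to combat weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]
```

The self-normalisation step (`w /= w.sum()`) is exactly what makes such estimators consistent but biased for finite particle counts, which is the bias that the PPG sampler is designed to reduce.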
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
In the upcoming years, artificial intelligence (AI) is going to transform the practice of medicine in most of its specialties. Deep learning can help achieve better and earlier problem detection, while reducing errors in diagnosis. By feeding a deep neural network (DNN) with the data from a low-cost and low-accuracy sensor array, we demonstrate that it becomes possible to significantly improve the measurements' precision and accuracy. The data collection is done with an array composed of 32 temperature sensors, including 16 analog and 16 digital sensors. All sensors have accuracies between 0.5 and 2.0$^\circ$C. 800 vectors are extracted, covering a range from 30 to 45$^\circ$C. In order to improve the temperature readings, we use machine learning to perform a regression analysis through a DNN. In an attempt to minimize the model's complexity in order to eventually run inferences locally, the network with the best results involves only three layers, using the hyperbolic tangent activation function and the Adam Stochastic Gradient Descent (SGD) optimizer. The model is trained with a randomly-selected dataset using 640 vectors (80% of the data) and tested with 160 vectors (20%). Using the mean squared error as a loss function between the data and the model's prediction, we achieve a loss of only 1.47x10$^{-4}$ on the training set and 1.22x10$^{-4}$ on the test set. As such, we believe this appealing approach offers a new pathway towards significantly better datasets using readily-available ultra low-cost sensors.
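A minimal sketch of this calibration setup on synthetic data: a small tanh network maps a noisy 32-sensor reading to the true temperature, trained and tested on an 80/20 split with an MSE loss. The synthetic data, network size, and plain full-batch gradient descent (instead of Adam) are illustrative stand-ins, not the paper's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_vectors = 32, 800

# True temperatures in [30, 45] °C; each sensor adds its own bias plus noise.
t_true = rng.uniform(30.0, 45.0, size=n_vectors)
bias = rng.uniform(-2.0, 2.0, size=n_sensors)
readings = t_true[:, None] + bias[None, :] + rng.normal(0, 0.5, (n_vectors, n_sensors))

# Standardise inputs and targets for stable training.
Xm, Xs = readings.mean(0), readings.std(0)
ym, ys = t_true.mean(), t_true.std()
X = (readings - Xm) / Xs
yn = (t_true - ym) / ys

# 80/20 split: 640 training vectors, 160 test vectors.
idx = rng.permutation(n_vectors)
tr, te = idx[:640], idx[640:]

# One tanh hidden layer, linear output, full-batch gradient descent on MSE.
H = 16
W1 = rng.normal(0, 0.1, (n_sensors, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1));         b2 = np.zeros(1)
lr = 0.01
for _ in range(4000):
    h = np.tanh(X[tr] @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    g_pred = 2 * (pred - yn[tr])[:, None] / len(tr)  # d(MSE)/d(pred)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h**2)                 # backprop through tanh
    gW1 = X[tr].T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def predict(Xn):
    return (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel() * ys + ym

test_mse = np.mean((predict(X[te]) - t_true[te]) ** 2)
# Compare against naively averaging the raw sensors.
naive_mse = np.mean((readings[te].mean(axis=1) - t_true[te]) ** 2)
```

The network effectively learns each sensor's bias and a weighting of the array, which is why a cheap, inaccurate sensor array can yield calibrated readings better than any individual sensor.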
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.